Organizations are increasingly implementing service-oriented architectures to integrate distributed, loosely coupled applications. These architectures are realized as services, which typically use XML-based messaging to communicate between service consumers and service providers across enterprise networks. We propose a scheme for caching fragments of service response messages to improve performance and service quality in service-oriented architectures. In our fragment caching scheme, we decompose responses into smaller fragments so that reusable components can be identified and cached in the XML routers of an XML overlay network within an enterprise network. Such caching reduces the processing load on providers and moves content closer to users, thereby lowering bandwidth requirements on the network and improving service times. We describe the system architecture and caching algorithm details of our scheme, develop an analysis of its expected benefits, and present the results of both simulation and case-study-based experiments that demonstrate its validity and performance improvements. Our simulation results show up to a 60% reduction in bandwidth consumption and up to a 50% improvement in response time. Further, our case study experiments demonstrate that, when there is no resource bottleneck, the cache-enabled case reduces average response times by 40%-50% and increases throughput by 150% compared with the no-cache and full-message-caching cases. In experiments contrasting fragment caching with full message caching, we found that full message caching provides benefits when the number of possible unique responses is low, while the benefits of fragment caching grow as the number of possible unique responses increases. These experimental results clearly demonstrate the benefits of our approach.
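The core idea of the abstract above can be illustrated with a minimal sketch. This is not the paper's implementation: the `FragmentCache` class, its content-hash keys, and the `fetch_fragment` fallback are all hypothetical, standing in for the XML routers that cache reusable fragments and assemble responses from cached parts.

```python
import hashlib


class FragmentCache:
    """Hypothetical sketch of response-fragment caching: reusable
    fragments are cached by key so that later responses can be
    assembled from cached parts instead of re-contacting the
    service provider for every fragment."""

    def __init__(self):
        self.store = {}   # fragment key -> fragment body
        self.hits = 0
        self.misses = 0

    @staticmethod
    def content_key(fragment: str) -> str:
        # One plausible keying scheme: hash of fragment content.
        return hashlib.sha1(fragment.encode()).hexdigest()

    def assemble(self, fragment_keys, fetch_fragment):
        """Build a full response from fragment keys; `fetch_fragment`
        is the fallback that contacts the provider on a cache miss."""
        parts = []
        for k in fragment_keys:
            if k in self.store:
                self.hits += 1
            else:
                self.misses += 1
                self.store[k] = fetch_fragment(k)
            parts.append(self.store[k])
        return "".join(parts)
```

In this toy model, two responses that share a header fragment cause only one provider fetch for that fragment; the second assembly is served partly from the cache, which is the bandwidth- and latency-saving effect the abstract describes.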
E-commerce is growing to represent an increasing share of overall sales revenue, and online sales are expected to continue growing for the foreseeable future. This growth translates into increased activity on the supporting infrastructure, leading to a corresponding need to scale that infrastructure. This is difficult in an era of shrinking budgets and increasing functional requirements. Increasingly, IT managers are turning to virtualized cloud providers, drawn by the pay-for-use business model. As cloud computing becomes more popular, it is important for data center managers to accomplish more with fewer dollars (i.e., to increase the utilization of existing resources). Advanced request distribution techniques can help ensure both high utilization and smart request distribution, where requests are sent to the service resources best able to handle them. While such request distribution techniques have been applied to the web and application layers of the traditional online application architecture, request distribution techniques for the data layer have focused primarily on online transaction processing scenarios. However, online applications often have a read-intensive workload, where read operations constitute a large percentage of requests (often 95 percent or higher). In this paper, we propose a cost-based database request distribution (C-DBRD) strategy, a policy for distributing requests across a cluster of commercial, off-the-shelf databases, and discuss its implementation. We first develop the intuition behind our approach and describe a high-level architecture for database request distribution. We then develop a theoretical model for database load computation, which we use to design a method for database request distribution and build a software implementation. Finally, following a design science methodology, we evaluate our artifacts through experimental evaluation.
Our experiments, in the lab and in production-scale systems, show significant improvement in database-layer resource utilization, demonstrating up to a 45 percent improvement over existing request distribution techniques.
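The load-aware distribution idea can be sketched in a few lines. This is a simplification, not the paper's C-DBRD model: the `distribute` function and its scalar per-replica load estimate are assumptions for illustration; the actual strategy derives query costs from a theoretical load-computation model.

```python
def distribute(query_cost, replicas):
    """Hypothetical cost-based request distribution: route each read
    query to the replica whose current estimated load is lowest, then
    charge that replica the query's estimated cost.

    replicas: list of dicts like {"name": str, "load": float}
    query_cost: estimated cost of the incoming query (arbitrary units)
    """
    target = min(replicas, key=lambda r: r["load"])  # least-loaded replica
    target["load"] += query_cost                     # book the new work
    return target["name"]
```

The key design point relative to round-robin is that routing decisions account for the uneven cost of read queries: an expensive analytical query raises its replica's booked load, steering subsequent cheap queries elsewhere.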
In general, OLAP applications are characterized by the rendering of enterprise data into multidimensional perspectives. This is achieved through complex, ad hoc queries that frequently aggregate and consolidate data, often using statistical formulae. For example, a retail organization is often interested in comparing the total sales for the current year with the total sales for the previous year, or in identifying sequences of five years or more during which sales increased within a 50-year envelope. It has been conjectured that relational database technology is well suited to fulfilling the needs of OLAP. This situation is somewhat analogous to the situation in the mid-1970s, when data processing experts would suggest special-purpose algorithms to perform operations such as selection, projection, and join. Several multidimensional data models have since been proposed for data warehouses. However, these models impose unrealistic restrictions, limiting either the number of attributes per dimension or the total number of measures representable in the cube. Moreover, dimensions and measures are treated asymmetrically, leaving these models unable to answer particular types of queries without expensive redesign.
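The retail example above (finding runs of five or more consecutive years of increasing sales within a long envelope) can be made concrete with a small sketch. The function name, data shape, and algorithm are illustrative assumptions, not the paper's formalism.

```python
def increasing_runs(sales_by_year, min_len=5):
    """Find runs of at least `min_len` consecutive years with strictly
    increasing sales -- the kind of ad hoc OLAP query described above.

    sales_by_year: dict mapping year -> total sales (illustrative data)
    Returns a list of (start_year, end_year) runs.
    """
    years = sorted(sales_by_year)
    runs, start = [], 0
    for i in range(1, len(years) + 1):
        # A run continues only if the next year is consecutive
        # and its sales strictly exceed the previous year's.
        ok = (i < len(years)
              and years[i] == years[i - 1] + 1
              and sales_by_year[years[i]] > sales_by_year[years[i - 1]])
        if not ok:
            if i - start >= min_len:
                runs.append((years[start], years[i - 1]))
            start = i
    return runs
```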
In the current corporate environment, business organizations must reengineer their processes to increase process performance efficiency. This goal has led to a recent surge of work on Business Process Reengineering (BPR) and Workflow Management. While a number of excellent papers have appeared on these topics, all of this work assumes that the existing (AS-IS) processes are known. However, as is also widely acknowledged, coming up with AS-IS process models is a nontrivial task that is currently practiced in a very ad hoc fashion. With this motivation, in this paper we propose a number of algorithms to discover, i.e., construct models of, AS-IS business processes. These methods have been implemented as tools that can automatically extract AS-IS process models. To the best of our knowledge, no such work exists in the BPR and workflow domain. We back up our theoretical work with a case study that illustrates the applicability of these methods to large real-world problems. We draw on previous work on process modeling and grammar discovery. This work is a requisite first step in any reengineering endeavor. Our methods, if adopted, have the potential to significantly reduce the organizational costs of process redesign.
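A minimal sketch can convey the flavor of discovering an AS-IS process model from recorded executions. This is a deliberate simplification of grammar-discovery-style algorithms, not the paper's method: it only extracts the directly-follows relation between activities from a set of traces, and the function name and trace format are assumptions.

```python
from collections import defaultdict


def discover_process(traces):
    """Toy process discovery: derive the directly-follows relation of
    an AS-IS process from recorded activity traces.

    traces: iterable of activity-name sequences, one per process run
    Returns a dict mapping each activity to the sorted list of
    activities observed to immediately follow it.
    """
    follows = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):  # adjacent activity pairs
            follows[a].add(b)
    return {a: sorted(bs) for a, bs in follows.items()}
```

Even this crude relation exposes process structure: an activity with multiple successors indicates a branch point, which is the kind of model element a reengineering effort needs before redesign can begin.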